Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms
Authors
Abstract
We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obtain a flexible algorithm for which we prove (subsequential) convergence to a stationary point under weak assumptions on the growth of the model function error. With a Euclidean distance function, special instances of the algorithm include, for example, Gradient Descent, Forward–Backward Splitting, and ProxDescent, without the common requirement of a "Lipschitz continuous gradient". In addition, we consider a broad class of Bregman distance functions (generated by Legendre functions) that replace the Euclidean distance. The algorithm has a wide range of applications, including many linear and non-linear inverse problems in image processing and machine learning.
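To make the scheme concrete, below is a minimal sketch (in Python with NumPy) of the simplest special instance named in the abstract: a linear model function combined with the Euclidean distance, which recovers Gradient Descent with an Armijo-like line search. Everything here, from the function name model_step to the parameters lam, delta, and gamma, is an illustrative assumption, not the paper's notation or code.

import numpy as np

def model_step(f, grad_f, x, lam=1.0, delta=1e-4, gamma=0.5, max_backtracks=50):
    # Convex model of f around the iterate x: f_x(y) = f(x) + <grad_f(x), y - x>.
    g = grad_f(x)
    # Proximal point of the model with the Euclidean distance
    # D(y, x) = ||y - x||^2 / (2*lam); for a linear model it is closed-form.
    y = x - lam * g
    d = y - x                          # descent direction toward the model minimizer
    model_decrease = g @ d             # f_x(y) - f_x(x), non-positive
    t = 1.0
    for _ in range(max_backtracks):
        # Armijo-like test: accept the step when the actual decrease of f is
        # at least the fraction delta of the decrease predicted by the model.
        if f(x + t * d) <= f(x) + delta * t * model_decrease:
            return x + t * d
        t *= gamma                     # shrink the step and try again
    return x                           # no sufficient decrease found; keep x

# Usage on the smooth, non-convex Rosenbrock function:
f = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
grad_f = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                             200 * (z[1] - z[0]**2)])
x = np.array([-1.0, 1.0])
for _ in range(1000):
    x = model_step(f, grad_f, x)
print(x)  # approaches the stationary point (1, 1)

In the same template, a composite objective f = g + h with the model f_x(y) = g(x) + <∇g(x), y − x> + h(y) turns the proximal subproblem into a proximal-gradient step, which corresponds to the Forward–Backward Splitting instance mentioned in the abstract.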
Similar resources
Bregmanized Domain Decomposition for Image Restoration
Large-scale computational problems arising in biomedical imaging, astronomy, art restoration, and data analysis have recently been gaining a lot of attention due to better hardware, the higher dimensionality of images and data sets, more parameters to be measured, and an increasing amount of acquired data. In the last couple of years, non-smooth minimization problems such as total variation minimization...
New construction and proof techniques of projection algorithm for countable maximal monotone mappings and weakly relatively non-expansive mappings in a Banach space
In a real uniformly convex and uniformly smooth Banach space, some new monotone projection iterative algorithms for countable maximal monotone mappings and countable weakly relatively non-expansive mappings are presented. Under mild assumptions, some strong convergence theorems are obtained. Compared to corresponding previous work, the new projection set involves projection instead of generalized...
A Modular Analysis of Adaptive (Non-)Convex Optimization: Optimism, Composite Objectives, and Variational Bounds
Recently, much work has been done on extending the scope of online learning and incremental stochastic optimization algorithms. In this paper we contribute to this effort in two ways: First, based on a new regret decomposition and a generalization of Bregman divergences, we provide a self-contained, modular analysis of the two workhorses of online learning: (general) adaptive versions of Mirror...
UniVR: A Universal Variance Reduction Framework for Proximal Stochastic Gradient Method
We revisit an important class of composite stochastic minimization problems that often arises from empirical risk minimization settings, such as Lasso, Ridge Regression, and Logistic Regression. We present a new algorithm UniVR based on stochastic gradient descent with variance reduction. Our algorithm supports non-strongly convex objectives directly, and outperforms all of the state-of-the-art...
Solving the Optimal Power Flow Problem under Normal and Emergency Conditions Using a Hybrid Particle Swarm Optimization and Nelder–Mead Algorithm (PSO-NM)
In this paper, the optimal power flow problem is solved using hybrid particle swarm optimization and Nelder–Mead algorithms. The goal of combining the Nelder–Mead (NM) simplex method and particle swarm optimization (PSO) is to integrate their advantages and avoid their disadvantages. The NM simplex method is a very efficient local search procedure, but its convergence is extremely sen...
Journal: CoRR
Volume: abs/1707.02278
Publication date: 2017